3 research outputs found

    Demonstration of Universal Parametric Entangling Gates on a Multi-Qubit Lattice

    We show that parametric coupling techniques can be used to generate selective entangling interactions for multi-qubit processors. By inducing coherent population exchange between adjacent qubits under frequency modulation, we implement a universal gate set for a linear array of four superconducting qubits. An average process fidelity of F = 93% is estimated for three two-qubit gates via quantum process tomography. We establish the suitability of these techniques for computation by preparing a four-qubit maximally entangled state and comparing the estimated state fidelity against the expected performance of the individual entangling gates. In addition, we prepare an eight-qubit register in all possible bitstring permutations and monitor the fidelity of a two-qubit gate across one pair of these qubits. Across all such permutations, an average fidelity of F = 91.6 ± 2.6% is observed. These results thus offer a path to a scalable architecture with high selectivity and low crosstalk.
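    The process fidelities quoted in this abstract come from quantum process tomography, where a reconstructed process (chi) matrix is compared against the ideal gate's. As a rough illustration of how such a number is computed (not code from the paper; the single-qubit chi matrices below are hypothetical), the process fidelity of normalized chi matrices reduces to a trace overlap:

    ```python
    import numpy as np

    def process_fidelity(chi_ideal, chi_est):
        """Trace overlap Tr(chi_ideal @ chi_est) of normalized chi matrices.

        When the ideal channel is a unitary (rank-one chi matrix), this
        overlap is the process fidelity estimated in process tomography.
        """
        return float(np.real(np.trace(chi_ideal @ chi_est)))

    # Hypothetical single-qubit example in the normalized Pauli basis:
    # the identity channel has chi = diag(1, 0, 0, 0).
    chi_id = np.diag([1.0, 0.0, 0.0, 0.0])
    # A noisy estimate mixing in 10% of the fully depolarizing channel.
    chi_noisy = 0.9 * chi_id + 0.1 * np.eye(4) / 4
    print(process_fidelity(chi_id, chi_noisy))  # 0.925
    ```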

    Low-dimensional Representations of Semantic Context in Language and the Brain

    No full text
    We study the problem of finding low-dimensional shared representations of meaning for natural language and brain-response modalities in multiple-subject narrative story datasets (a portion of an episode of the Sherlock television program and a chapter of a Harry Potter book). These datasets pair fMRI responses with textual descriptions. Our first goal is to determine whether an fMRI space can be learned across subjects that correlates well with semantic context vectors derived from recent unsupervised methods in natural language understanding for embedding word meaning in R^n. Can distributed, low-dimensional representations of narrative context predict voxels? Our second goal is to determine whether a shared space between the fMRI voxels and the semantic word embeddings exists that can be used to decode brain states into coherent textual representations of thought. First, we were able to construct a fine-grained 300-dimensional embedding of the semantic context induced by a scene-annotation dataset for Sherlock. Our primary positive result in this thesis is that the multi-view Shared Response Model produces a semantically relevant 20-dimensional space using views of multiple subjects watching Sherlock. This low-dimensional shared fMRI space is able to match fMRI responses to scenes with performance considerably above chance. Using the fMRI shared space instead of individual fMRI responses brings a large improvement in reconstructing voxels from semantic vectors, and suggests that other recent work in this area may benefit from applying the Shared Response Model.
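    The Shared Response Model mentioned here learns, for each subject i, an orthonormal map W_i from a k-dimensional shared space to that subject's voxel space so that X_i ≈ W_i S for a common time course S. A minimal NumPy sketch of the deterministic alternating-minimization version (illustrative only; the dimensions and variable names are assumptions, not taken from the thesis, which uses the multi-view probabilistic variant):

    ```python
    import numpy as np

    def srm(Xs, k, n_iter=20, seed=0):
        """Deterministic Shared Response Model.

        Xs: list of (voxels_i, time) arrays, one per subject.
        Alternately solves for orthonormal maps W_i (voxels_i, k) and a
        shared response S (k, time) minimizing sum_i ||X_i - W_i S||_F^2.
        """
        rng = np.random.default_rng(seed)
        t = Xs[0].shape[1]
        S = rng.standard_normal((k, t))
        Ws = [None] * len(Xs)
        for _ in range(n_iter):
            for i, X in enumerate(Xs):
                # Orthogonal Procrustes step: W_i = U V^T from the SVD of X S^T.
                U, _, Vt = np.linalg.svd(X @ S.T, full_matrices=False)
                Ws[i] = U @ Vt
            # Shared-response step: average the back-projected subject data.
            S = np.mean([W.T @ X for W, X in zip(Ws, Xs)], axis=0)
        return Ws, S
    ```

    In this form, matching fMRI responses to scenes amounts to comparing columns of S (or of W_i.T @ X_i for held-out data) rather than raw voxel time series.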

    Deep learning for object detection and scene perception in self-driving cars: Survey, challenges, and open issues

    No full text